OCPEDGE-1036: fix: latency tuning for the rt-kernel tests on AWS metal #30790
jeff-roche wants to merge 1 commit into openshift:main
Conversation
|
Pipeline controller notification: for optional jobs, comment to trigger. This repository is configured in automatic mode. |
|
@jeff-roche: This pull request references OCPEDGE-1036 which is a valid Jira issue. Details: in response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
/test ? |
|
/test e2e-gcp-ovn-rt-upgrade |
|
/payload-job periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/1e7a2f80-0b96-11f1-9252-e810dd3e02ff-0 |
|
Scheduling required tests: |
|
The payload job fails to upgrade but the RT Tests themselves pass. Addressing the upgrade failures on #30608 |
|
/lgtm |
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/61996750-0c26-11f1-8539-63c794c57c62-0 |
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9cdf16c0-11a8-11f1-9f71-c9b6ff3ae133-0 |
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4d5afc30-1253-11f1-9df8-fe79311f410f-0 |
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
Walkthrough

getRealTimeWorkerNodes now returns a slice of node names and detects non-metal nodes (which forces all real-time thresholds to 7500µs). Real-time test runners were refactored to use per-test thresholds, capture outputs, produce structured latency analyses, and write timestamped per-test artifacts.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as "Test Runner"
    participant Tool as "External Test Binary"
    participant Parser as "parseLatencyResults"
    participant Thresholds as "rtTestThresholds"
    participant Artifact as "writeAnalysisArtifact / writeTestArtifacts"
    participant Logger as "e2e.Logf"
    Runner->>Tool: execute test (capture stdout/stderr)
    Tool-->>Runner: JSON/text output
    Runner->>Parser: send output + testName
    Parser->>Thresholds: fetch soft/hard thresholds for testName
    Thresholds-->>Parser: return thresholds
    Parser-->>Runner: rtLatencyAnalysis (PASS/WARN/FAIL + metrics)
    Runner->>Artifact: write json/log artifacts (timestamped)
    Runner->>Logger: log summary, warnings, or errors
```
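The non-metal fallback the walkthrough describes can be sketched as follows. This is a minimal illustration, assuming a simple map of per-test thresholds; the constant 7500µs and the map name `rtTestThresholds` come from the PR description, everything else is illustrative:

```go
package main

import "fmt"

// nonMetalThresholdUs is the padded latency threshold (in µs) applied to
// every real-time test when the worker nodes are not bare metal.
const nonMetalThresholdUs = 7500

// rtTestThresholds maps each RT test name to its latency threshold in µs
// (illustrative defaults, not the repository's exact table).
var rtTestThresholds = map[string]int{
	"oslat":         100,
	"cyclictest":    100,
	"hwlatdetect":   100,
	"deadline_test": 100,
}

// padThresholdsForNonMetal raises every threshold to the non-metal value,
// mirroring the fallback described in the walkthrough.
func padThresholdsForNonMetal(thresholds map[string]int) {
	for test := range thresholds {
		thresholds[test] = nonMetalThresholdUs
	}
}

func main() {
	nodesAreMetal := false // in the real code, derived from node inspection
	if !nodesAreMetal {
		padThresholdsForNonMetal(rtTestThresholds)
	}
	fmt.Println(rtTestThresholds["oslat"]) // 7500
}
```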
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ❌ Failed checks (1 warning)
✅ Passed checks (4 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/44998af0-130a-11f1-9082-40e95c4d00b8-0 |
|
Actionable comments posted: 3
🧹 Nitpick comments (1)
test/extended/kernel/common.go (1)
64-68: Side effect in getter function modifies global state.
getRealTimeWorkerNodes modifies the global rtTestThresholds map, which is unexpected for a function with a "get" prefix. This couples threshold configuration to node discovery and makes the behavior harder to reason about. Consider either:
- Renaming the function to reflect that it configures thresholds (e.g., setupRealTimeWorkerNodes)
- Returning the metal status and handling threshold adjustment at the call site
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/kernel/common.go` around lines 64 - 68, getRealTimeWorkerNodes currently mutates the global rtTestThresholds map (when nodesAreMetal is false), which is a surprising side effect for a getter; stop modifying rtTestThresholds inside getRealTimeWorkerNodes and instead either (A) rename getRealTimeWorkerNodes to setupRealTimeWorkerNodes if you intend it to configure thresholds, or (B) change getRealTimeWorkerNodes to only return the metal status (bool nodesAreMetal) and move the rtTestThresholds adjustments out to the call site so callers can set rtTestThresholds[test] = 7500 when nodesAreMetal is false; update all callers of getRealTimeWorkerNodes accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/kernel/common.go`:
- Around line 57-68: The metal-detection currently uses node.GetLabels() and
sets nodesAreMetal = false if any worker node isn't metal, which incorrectly
flags clusters where non-RT workers are non-metal; update the logic so the metal
check is only performed for nodes that match the RT kernel condition (the same
condition used to select RT nodes) — i.e., inside the RT kernel match block
iterate those nodes, call node.GetLabels(), and only then modify nodesAreMetal
and adjust rtTestThresholds; reference variables/functions: node.GetLabels(),
nodesAreMetal, rtTestThresholds, and the RT kernel match condition so the
threshold padding runs only when RT nodes are detected as non-metal.
- Line 48: Replace the incorrect capacity argument on the nodes slice
allocation: the current call uses kubeNodes.Size() (which returns protobuf
serialized size) when constructing nodes via make([]string, 0, ...); change it
to use the number of items with len(kubeNodes.Items) so nodes = make([]string,
0, len(kubeNodes.Items)). Update the allocation site that references kubeNodes
and the nodes variable in test/extended/kernel/common.go (search for the
make([]string, 0, kubeNodes.Size()) occurrence).
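The allocation fix above is small but easy to get wrong; here is a sketch with a stand-in struct (the real type is a Kubernetes NodeList, whose generated `Size()` returns the protobuf-serialized byte size, not the item count):

```go
package main

import "fmt"

// nodeList is a stand-in for the Kubernetes NodeList type; only the
// Items field matters for this illustration.
type nodeList struct {
	Items []struct{ Name string }
}

func main() {
	kubeNodes := nodeList{Items: make([]struct{ Name string }, 3)}

	// Wrong: make([]string, 0, kubeNodes.Size()) would reserve capacity
	// for a serialized byte count. Right: reserve one slot per node item.
	nodes := make([]string, 0, len(kubeNodes.Items))
	for _, n := range kubeNodes.Items {
		nodes = append(nodes, n.Name)
	}
	fmt.Println(cap(nodes) == len(kubeNodes.Items)) // true
}
```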
In `@test/extended/kernel/tools.go`:
- Around line 165-167: The error message in runCyclictest incorrectly references
"oslat test"; update the returned fmt.Errorf string in the runCyclictest
function (where cpuCount is checked) to reference "cyclictest" (or
"runCyclictest") instead and preserve the numeric cpuCount interpolation and
wording; ensure only the test name in the message is changed so the check using
cpuCount and the fmt.Errorf call remain otherwise identical.
---
Nitpick comments:
In `@test/extended/kernel/common.go`:
- Around line 64-68: getRealTimeWorkerNodes currently mutates the global
rtTestThresholds map (when nodesAreMetal is false), which is a surprising side
effect for a getter; stop modifying rtTestThresholds inside
getRealTimeWorkerNodes and instead either (A) rename getRealTimeWorkerNodes to
setupRealTimeWorkerNodes if you intend it to configure thresholds, or (B) change
getRealTimeWorkerNodes to only return the metal status (bool nodesAreMetal) and
move the rtTestThresholds adjustments out to the call site so callers can set
rtTestThresholds[test] = 7500 when nodesAreMetal is false; update all callers of
getRealTimeWorkerNodes accordingly.
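Option (B) above can be sketched like this. It also folds in the earlier inline comment (only RT nodes should influence the metal check). The `rtNode` struct and its fields are illustrative stand-ins for the real node-label inspection, not the repository's actual types:

```go
package main

import "fmt"

var rtTestThresholds = map[string]int{"oslat": 100, "cyclictest": 100}

// rtNode is a hypothetical stand-in for a worker node: whether it runs
// the RT kernel, and whether it is a bare-metal instance.
type rtNode struct {
	Name     string
	RTKernel bool
	Metal    bool
}

// getRealTimeWorkerNodes is side-effect-free: it only reports which nodes
// run the RT kernel and whether all of those nodes are metal. Non-RT
// nodes never flip the metal flag (per the review comment).
func getRealTimeWorkerNodes(all []rtNode) (names []string, metal bool) {
	metal = true
	for _, n := range all {
		if !n.RTKernel {
			continue
		}
		names = append(names, n.Name)
		if !n.Metal {
			metal = false
		}
	}
	return names, metal
}

func main() {
	nodes := []rtNode{
		{Name: "worker-0", RTKernel: true, Metal: true},
		{Name: "infra-0", RTKernel: false, Metal: false}, // must not flip the flag
	}
	names, metal := getRealTimeWorkerNodes(nodes)
	// Threshold padding now happens at the call site, not inside the getter.
	if !metal {
		for t := range rtTestThresholds {
			rtTestThresholds[t] = 7500
		}
	}
	fmt.Println(names, metal) // [worker-0] true
}
```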
ℹ️ Review info
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
test/extended/kernel/common.go
test/extended/kernel/tools.go
|
Scheduling required tests: |
8de1caf to
40cfb1d
Compare
|
1059dd7 to
ceb21f5
Compare
|
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/7e364250-18ad-11f1-9802-7c1b2e727428-0 |
|
/test e2e-gcp-ovn-rt-upgrade |
|
Scheduling required tests: |
|
/retest |
|
/retest |
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/09f90480-1962-11f1-9f39-e2de7efda1ab-0 |
Replace binary pass/fail latency detection with a three-tier analysis:
- Two-tier thresholds (soft/hard) to distinguish warnings from failures
- Statistical percentage-based detection (>5% CPUs over soft = systemic fail)
- Structured JSON diagnostic artifacts for richer test result analysis

Metal thresholds: oslat/cyclictest soft=100us hard=500us, hwlatdetect/deadline_test soft=100us hard=200us. Non-metal thresholds: soft=7500us hard=10000us.

Unifies parseOslatResults and parseCyclictestResults into a single parseLatencyResults function with comprehensive statistics (max, avg, P99, per-CPU breakdown). Adds unit tests for the new parsing logic.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
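The three-tier verdict described in the commit message can be sketched as follows. This is an illustration of the stated policy (any CPU over hard fails; more than 5% of CPUs over soft is systemic; isolated soft spikes only warn), not the PR's actual parseLatencyResults implementation, and the names here are assumptions:

```go
package main

import "fmt"

// rtThresholds holds the soft/hard latency cutoffs in µs.
type rtThresholds struct {
	SoftUs, HardUs float64
}

// maxSoftViolationPct: strictly more than 5% of CPUs over the soft
// threshold is treated as a systemic failure.
const maxSoftViolationPct = 5.0

// classifyLatencies applies the three-tier policy to per-CPU max latencies.
func classifyLatencies(perCPUMaxUs []float64, th rtThresholds) string {
	softViolations := 0
	for _, max := range perCPUMaxUs {
		if max > th.HardUs {
			return "FAIL" // any CPU over the hard threshold fails outright
		}
		if max > th.SoftUs {
			softViolations++
		}
	}
	pct := 100 * float64(softViolations) / float64(len(perCPUMaxUs))
	switch {
	case pct > maxSoftViolationPct:
		return "FAIL" // systemic: too many CPUs over the soft threshold
	case softViolations > 0:
		return "WARN" // isolated spike(s): warn but pass
	default:
		return "PASS"
	}
}

func main() {
	th := rtThresholds{SoftUs: 100, HardUs: 500}
	perCPU := make([]float64, 100)
	for i := range perCPU {
		perCPU[i] = 50
	}
	perCPU[7] = 120 // one isolated soft-threshold spike out of 100 CPUs
	fmt.Println(classifyLatencies(perCPU, th)) // WARN
}
```

The switch ordering is the design point: hard violations short-circuit before any statistics, so a single extreme outlier can never be averaged away.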
ceb21f5 to
ced9954
Compare
|
/payload-job periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-single-node-rt-upgrade-test |
|
@jeff-roche: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/dbcf1fa0-1c03-11f1-9335-cf4fd9a8554d-0 |
|
Actionable comments posted: 2
🧹 Nitpick comments (1)
test/extended/kernel/tools_test.go (1)
Right now the suite only exercises custom thresholds plus 1% and 10% soft-threshold cases. A drift in the documented 5% cutoff, or in the package defaults for rtTestThresholds, would still pass. Please add cases for exactly 5% vs 6% and a small assertion on the default threshold map.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/kernel/tools_test.go` around lines 33 - 187, Add tests that cover the 5% boundary and the next integer (6%) and assert the package default threshold map; specifically add a test that builds a report where exactly 5% of CPUs exceed SoftThreshold (e.g., for 100 CPUs set 5 CPUs > SoftThreshold) and assert parseLatencyResults returns PASS (or WARN/Fail per policy) and another where 6% exceed and assert the opposite result, using parseLatencyResults and rtThresholdConfig to construct thresholds; also add a small test that asserts rtTestThresholds (the default threshold map) contains the expected default SoftThreshold/HardThreshold entries for the relevant tests to catch regressions in defaults.
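The boundary the comment wants pinned down can be sketched with a stand-in classifier, since parseLatencyResults' real signature isn't shown here. Under a strict ">5%" rule, exactly 5% stays a warning and 6% fails; the real policy should be asserted, not assumed:

```go
package main

import "fmt"

const maxSoftViolationPct = 5.0

// verdict is a hypothetical stand-in for the soft-threshold policy:
// strictly more than 5% of CPUs over soft is treated as systemic.
func verdict(totalCPUs, overSoft int) string {
	pct := 100 * float64(overSoft) / float64(totalCPUs)
	if pct > maxSoftViolationPct {
		return "FAIL"
	}
	if overSoft > 0 {
		return "WARN"
	}
	return "PASS"
}

func main() {
	// The boundary the review asks to cover: 5/100 vs 6/100 CPUs over soft.
	fmt.Println(verdict(100, 5)) // WARN: exactly at the cutoff
	fmt.Println(verdict(100, 6)) // FAIL: strictly over the cutoff
}
```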
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/kernel/tools.go`:
- Around line 91-120: runDeadlineTest and runHwlatdetect currently just pass a
single hard threshold and return on process exit, skipping the shared
PASS/WARN/FAIL analysis and not emitting an *_analysis.json; modify
runDeadlineTest and runHwlatdetect to feed the command output (res) and
thresholds (rtTestThresholds[testName]) into the same analysis routine used for
oslat/cyclictest (or implement a shared function analyzeTestOutput(output
string, thresholds Thresholds) that applies hard/soft thresholds and %‑over‑soft
rules), call writeTestArtifacts for both the raw log and the generated
<testName>_analysis.json, and return an error only when the analysis determines
FAIL (preserving existing error wrapping behavior using errors.Wrap with the
same messages).
- Around line 56-63: The systemic soft-threshold cutoff and per-test defaults
were loosened; revert them to the contract values by changing
maxSoftThresholdViolationPercent from 10.0 back to the contract value (e.g.,
5.0) and update rtTestThresholds so oslat and cyclictest use SoftThreshold: 100
(not 150) and all tests (deadline_test, oslat, cyclictest, hwlatdetect) use the
expected HardThreshold: 200 (not 500) so PASS/WARN/FAIL behavior matches the PR
contract.
---
Nitpick comments:
In `@test/extended/kernel/tools_test.go`:
- Around line 33-187: Add tests that cover the 5% boundary and the next integer
(6%) and assert the package default threshold map; specifically add a test that
builds a report where exactly 5% of CPUs exceed SoftThreshold (e.g., for 100
CPUs set 5 CPUs > SoftThreshold) and assert parseLatencyResults returns PASS (or
WARN/Fail per policy) and another where 6% exceed and assert the opposite
result, using parseLatencyResults and rtThresholdConfig to construct thresholds;
also add a small test that asserts rtTestThresholds (the default threshold map)
contains the expected default SoftThreshold/HardThreshold entries for the
relevant tests to catch regressions in defaults.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bfd9f28e-5340-408c-82ed-35728aaf79fd
📒 Files selected for processing (3)
test/extended/kernel/common.go
test/extended/kernel/tools.go
test/extended/kernel/tools_test.go
|
|
Scheduling required tests: |
|
/retest |
|
/verified by payload job |
|
@jeff-roche: This PR has been marked as verified. Details: in response to this: |
|
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt-rhcos10-techpreview |
|
@eggfoobar: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/3d688e50-1c80-11f1-8471-5c7c808c8675-0 |
|
/lgtm |
|
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: eggfoobar, jeff-roche, qJkee. The full list of commands accepted by this bot can be found here. The pull request process is described here. Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
|
/retest |
|
@jeff-roche: The following test failed, say
Full PR test history. Your PR dashboard. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
Summary
Replaces the binary pass/fail latency detection with a smarter three-tier analysis that distinguishes real RT kernel issues from environmental noise (e.g., isolated single-CPU spikes on AWS metal instances).
Changes
Two-tier soft/hard thresholds:
Statistical percentage-based detection:
Structured JSON diagnostic artifacts:
_analysis.json artifact with: max, avg, P99 latency, per-CPU breakdown, soft/hard threshold counts, and overall result (PASS/WARN/FAIL)

Thresholds
Code cleanup
parseOslatResults and parseCyclictestResults unified into a single parseLatencyResults function

Expected behavior with real job data
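For illustration, the _analysis.json artifact described in this summary might look roughly like the following. This is a hypothetical sketch: the field names and layout are assumptions, not the artifact's actual schema.

```json
{
  "test": "oslat",
  "result": "WARN",
  "maxLatencyUs": 132,
  "avgLatencyUs": 41.7,
  "p99LatencyUs": 97,
  "softThresholdUs": 100,
  "hardThresholdUs": 500,
  "cpusOverSoft": 1,
  "cpusOverHard": 0,
  "perCPU": [
    { "cpu": 2, "maxUs": 132 },
    { "cpu": 3, "maxUs": 58 }
  ]
}
```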
Summary by CodeRabbit
Release Notes
New Features
Improvements
Tests